Pet Facial Expression

TABLE OF CONTENTS
¶

  • 1. Introduction

  • 2. Import Necessaries

  • 3. EDA

    • 3.1. Define data path and dataset name
    • 3.2. Create Dataframe for the dataset
    • 3.3. Display Number of Examples in the dataset
    • 3.4. Display Number of Classes in the dataset
    • 3.5. Display count of images in each class of the dataset
    • 3.6. Visualize Each Class in the dataset
    • 3.7. Check Null values in the dataframe
    • 3.8. Visualize Null values
  • 4. Split dataframe into train, valid, and test

  • 5. Create Image Data Generator

  • 6. Visualize Training dataset

  • 7. Model Structure

    • 7.1. Generic Model Creation
    • 7.2. Define Early Stop
    • 7.3. Train model
  • 8. Evaluate Model

    • 8.1. Plot accuarcy and loss curve
    • 8.2. Model Accuarcy
    • 8.3. Get Predictions
  • 9. Save the Model

  • 10. Load Model

  • 11. AUTHOR MESSAGE

1 || Introduction¶

Steps we will go through:


Imagine a world where machines effortlessly identify cars, trucks, motorcycles, and more, solely through images. That's precisely what we're setting out to achieve. The implications are immense – from urban planning to autonomous driving, our journey into image classification holds transformative power.

Let's dive in and harness the magic of deep learning to crack the code of vehicle type recognition

we will walk through this steps:

  1. Load the data by storing each image path in a list and its corresponding label in another list
  2. Transform the lists into dataframe
  3. EDA and analyze the data for more insights
  4. Split the data into train, test and validation datasets
  5. Create Data Generator for Train, Test and validation datasets

Tensorflow Generators are very useful to Generate batches of tensor image data with real-time data augmentation.
6. Load the pretrained model, add some layers on top of its base layer and compile it
We will be using EffiecentNet, of course you can use any pretrained model you want and tune its architecture and parameters!
7. Evaluate the model by plotting acc and loss curves, plot confussion matrix and print classification report
8. Save the model to use it in production
9. Finally, Load the model and make predictions

2 || Import Necessaries¶

In [ ]:
import os
import itertools

import cv2
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_style('darkgrid')
import matplotlib.pyplot as plt
import missingno as msno
from plotly.subplots import make_subplots
import plotly.graph_objects as go
from plotly.offline import iplot

from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam, Adamax
from tensorflow.keras.metrics import categorical_crossentropy
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Activation, Dropout, BatchNormalization
from tensorflow.keras import regularizers
from keras.callbacks import EarlyStopping, LearningRateScheduler
import numpy as np
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.efficientnet import preprocess_input

# Ignore Warnings
import warnings
warnings.filterwarnings("ignore")

print ('modules loaded')
2023-11-24 12:29:38.292990: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
modules loaded

3 || EDA¶

3.1 || Define data path and dataset name¶

In [ ]:
train_data_dir = 'pets_facial_expression_dataset/master/train'
valid_data_dir = 'pets_facial_expression_dataset/master/valid'
test_data_dir = 'pets_facial_expression_dataset/master/test'

data_dir = 'pets_facial_expression_dataset'


ds_name = 'Pets Facial Expression'

3.2 || Create Dataframe for the dataset¶

In [ ]:
# Generate data paths with labels

def generate_data_paths(data_dir):
    
    filepaths = []
    labels = []

    folds = os.listdir(data_dir)
    for fold in folds:
        if fold == 'master':
            continue
            
        foldpath = os.path.join(data_dir, fold)
        filelist = os.listdir(foldpath)
        for file in filelist:
            fpath = os.path.join(foldpath, file)
            filepaths.append(fpath)
            labels.append(fold)
            
    return filepaths, labels


filepaths, labels = generate_data_paths(data_dir)
In [ ]:
def create_df(filepaths, labels):

    Fseries = pd.Series(filepaths, name= 'filepaths')
    Lseries = pd.Series(labels, name='labels')
    df = pd.concat([Fseries, Lseries], axis= 1)
    return df

df = create_df(filepaths, labels)
In [ ]:
df.head()
Out[ ]:
filepaths labels
0 pets_facial_expression_dataset/happy/aug-70-08... happy
1 pets_facial_expression_dataset/happy/aug-121-0... happy
2 pets_facial_expression_dataset/happy/aug-78-09... happy
3 pets_facial_expression_dataset/happy/aug-68-08... happy
4 pets_facial_expression_dataset/happy/aug-8-012... happy

3.3 || Display Number of Examples in the dataset¶

In [ ]:
def num_of_examples(df, name='df'):
    print(f"The {name} dataset has {df.shape[0]} images.")
    
num_of_examples(df, ds_name)
The Pets Facial Expression dataset has 1000 images.

3.4 || Display Number of Classes in the dataset¶

In [ ]:
def num_of_classes(df, name='df'):
    print(f"The {name} dataset has {len(df['labels'].unique())} classes")
    
num_of_classes(df, ds_name)
The Pets Facial Expression dataset has 4 classes

3.5 || Display count of images in each class of the dataset¶

In [ ]:
def classes_count(df, name='df'):
    
    print(f"The {name} dataset has: ")
    print("="*70)
    print()
    for name in df['labels'].unique():
        num_class = len(df['labels'][df['labels'] == name])
        print(f"Class '{name}' has {num_class} images")
        print('-'*70)
        
classes_count(df, ds_name)
The Pets Facial Expression dataset has: 
======================================================================

Class 'happy' has 250 images
----------------------------------------------------------------------
Class 'Sad' has 250 images
----------------------------------------------------------------------
Class 'Other' has 250 images
----------------------------------------------------------------------
Class 'Angry' has 250 images
----------------------------------------------------------------------

3.6 || Visualize Each Class in the dataset¶

In [ ]:
def cat_summary_with_graph(dataframe, col_name):
    fig = make_subplots(rows=1, cols=2,
                        subplot_titles=('Countplot', 'Percentages'),
                        specs=[[{"type": "xy"}, {'type': 'domain'}]])

    fig.add_trace(go.Bar(y=dataframe[col_name].value_counts().values.tolist(),
                         x=[str(i) for i in dataframe[col_name].value_counts().index],
                         text=dataframe[col_name].value_counts().values.tolist(),
                         textfont=dict(size=15),
                         name=col_name,
                         textposition='auto',
                         showlegend=False,
                         marker=dict(color=colors,
                                     line=dict(color='#DBE6EC',
                                               width=1))),
                  row=1, col=1)

    fig.add_trace(go.Pie(labels=dataframe[col_name].value_counts().keys(),
                         values=dataframe[col_name].value_counts().values,
                         textfont=dict(size=20),
                         textposition='auto',
                         showlegend=False,
                         name=col_name,
                         marker=dict(colors=colors)),
                  row=1, col=2)

    fig.update_layout(title={'text': col_name,
                             'y': 0.9,
                             'x': 0.5,
                             'xanchor': 'center',
                             'yanchor': 'top'},
                      template='plotly_white')

    iplot(fig)
    
    
colors = ['#494BD3', '#E28AE2', '#F1F481', '#79DB80', '#DF5F5F',
              '#69DADE', '#C2E37D', '#E26580', '#D39F49', '#B96FE3']

cat_summary_with_graph(df,'labels')

3.7 || Check Null values in the dataframe¶

In [ ]:
def check_null_values(df, name='df'):
    
    num_null_vals = sum(df.isnull().sum().values)
    
    if not num_null_vals:
        print(f"The {name} dataset has no null values")
    
    else:
        print(f"The {name} dataset has {num_null_vals} null values")
        print('-'*70)
        print('Total null values in each column:\n')
        print(df.isnull().sum())
        

check_null_values(df, ds_name)
The Pets Facial Expression dataset has no null values

3.8 || Visualize Null values¶

In [ ]:
msno.matrix(df)
plt.title('Distribution of Missing Values', fontsize=30, fontstyle='oblique');

4 ||Split dataframe into train, valid, and test¶

In [ ]:
# train dataframe
train_df, dummy_df = train_test_split(df,  train_size= 0.8, shuffle= True, random_state= 123)

# valid and test dataframe
valid_df, test_df = train_test_split(dummy_df,  train_size= 0.6, shuffle= True, random_state= 123)
In [ ]:
'''
filepaths, labels = generate_data_paths(train_data_dir)
train_df = create_df(filepaths, labels)

filepaths, labels = generate_data_paths(valid_data_dir)
valid_df = create_df(filepaths, labels)

filepaths, labels = generate_data_paths(test_data_dir)
test_df = create_df(filepaths, labels)
'''
Out[ ]:
'\nfilepaths, labels = generate_data_paths(train_data_dir)\ntrain_df = create_df(filepaths, labels)\n\nfilepaths, labels = generate_data_paths(valid_data_dir)\nvalid_df = create_df(filepaths, labels)\n\nfilepaths, labels = generate_data_paths(test_data_dir)\ntest_df = create_df(filepaths, labels)\n'
In [ ]:
num_of_examples(train_df, "Training "+ds_name)
num_of_examples(valid_df, "Validation "+ds_name)
num_of_examples(test_df, "Testing "+ds_name)
The Training Pets Facial Expression dataset has 800 images.
The Validation Pets Facial Expression dataset has 120 images.
The Testing Pets Facial Expression dataset has 80 images.
In [ ]:
num_of_classes(train_df, "Training "+ds_name)
num_of_classes(valid_df, "Validation "+ds_name)
num_of_classes(test_df, "Testing "+ds_name)
The Training Pets Facial Expression dataset has 4 classes
The Validation Pets Facial Expression dataset has 4 classes
The Testing Pets Facial Expression dataset has 4 classes
In [ ]:
classes_count(train_df, 'Training '+ds_name)
The Training Pets Facial Expression dataset has: 
======================================================================

Class 'Other' has 209 images
----------------------------------------------------------------------
Class 'Angry' has 192 images
----------------------------------------------------------------------
Class 'Sad' has 201 images
----------------------------------------------------------------------
Class 'happy' has 198 images
----------------------------------------------------------------------
In [ ]:
classes_count(valid_df, 'Validation '+ds_name)
The Validation Pets Facial Expression dataset has: 
======================================================================

Class 'Sad' has 29 images
----------------------------------------------------------------------
Class 'Other' has 28 images
----------------------------------------------------------------------
Class 'happy' has 35 images
----------------------------------------------------------------------
Class 'Angry' has 28 images
----------------------------------------------------------------------
In [ ]:
classes_count(test_df, 'Testing '+ds_name)
The Testing Pets Facial Expression dataset has: 
======================================================================

Class 'happy' has 17 images
----------------------------------------------------------------------
Class 'Sad' has 20 images
----------------------------------------------------------------------
Class 'Angry' has 30 images
----------------------------------------------------------------------
Class 'Other' has 13 images
----------------------------------------------------------------------

5 ||Create Image Data Generator¶

In [ ]:
# crobed image size
batch_size = 16
img_size = (224, 224)
channels = 3
img_shape = (img_size[0], img_size[1], channels)

# Recommended : use custom function for test data batch size, else we can use normal batch size.
ts_length = len(test_df)
test_batch_size = max(sorted([ts_length // n for n in range(1, ts_length + 1) if ts_length%n == 0 and ts_length/n <= 80]))
test_steps = ts_length // test_batch_size

# This function which will be used in image data generator for data augmentation, it just take the image and return it again.
def scalar(img):
    return img

tr_gen = ImageDataGenerator(preprocessing_function= scalar,
                           rotation_range=40,
                           width_shift_range=0.2,
                           height_shift_range=0.2,
                           brightness_range=[0.4,0.6],
                           zoom_range=0.3,
                           horizontal_flip=True,
                           vertical_flip=True)

ts_gen = ImageDataGenerator(preprocessing_function= scalar,
                           rotation_range=40,
                           width_shift_range=0.2,
                           height_shift_range=0.2,
                           brightness_range=[0.4,0.6],
                           zoom_range=0.3,
                           horizontal_flip=True,
                           vertical_flip=True)

train_gen = tr_gen.flow_from_dataframe(train_df, 
                                       x_col= 'filepaths', 
                                       y_col= 'labels', 
                                       target_size= img_size, 
                                       class_mode= 'categorical',
                                       color_mode= 'rgb', 
                                       shuffle= True, 
                                       batch_size= batch_size)

valid_gen = ts_gen.flow_from_dataframe(valid_df, 
                                       x_col= 'filepaths', 
                                       y_col= 'labels', 
                                       target_size= img_size, 
                                       class_mode= 'categorical',
                                       color_mode= 'rgb', 
                                       shuffle= True, 
                                       batch_size= batch_size)

# Note: we will use custom test_batch_size, and make shuffle= false
test_gen = ts_gen.flow_from_dataframe(test_df, 
                                      x_col= 'filepaths', 
                                      y_col= 'labels', 
                                      target_size= img_size, 
                                      class_mode= 'categorical',
                                      color_mode= 'rgb', 
                                      shuffle= False, 
                                      batch_size= test_batch_size)
Found 800 validated image filenames belonging to 4 classes.
Found 120 validated image filenames belonging to 4 classes.
Found 80 validated image filenames belonging to 4 classes.

6 ||Visualize Training dataset¶

In [ ]:
g_dict = train_gen.class_indices      # defines dictionary {'class': index}
classes = list(g_dict.keys())       # defines list of dictionary's kays (classes), classes names : string
images, labels = next(train_gen)      # get a batch size samples from the generator

plt.figure(figsize= (20, 20))

for i in range(16):
    plt.subplot(4, 4, i + 1)
    image = images[i] / 255       # scales data to range (0 - 255)
    plt.imshow(image)
    index = np.argmax(labels[i])  # get image index
    class_name = classes[index]   # get class of image
    plt.title(class_name, color= 'blue', fontsize= 12)
    plt.axis('off')
    
plt.show()

7 ||Model Structure¶

7.1 ||Generic Model Creation¶

In [ ]:
# Create Model Structure
img_size = (224, 224)
channels = 3
img_shape = (img_size[0], img_size[1], channels)
class_count = len(list(train_gen.class_indices.keys())) # to define number of classes in dense layer

# create pre-trained model (you can built on pretrained model such as :  efficientnet, VGG , Resnet )
# we will use efficientnetb3 from EfficientNet family.
base_model = tf.keras.applications.efficientnet.EfficientNetB5(include_top= False, weights= "imagenet", input_shape= img_shape, pooling= 'max')
base_model.trainable = False

model = Sequential([
    base_model,
    BatchNormalization(axis= -1, momentum= 0.99, epsilon= 0.001),
    Dense(256, activation='relu'),
    Dense(128, kernel_regularizer= regularizers.l2(l= 0.016), activity_regularizer= regularizers.l1(0.006),
                bias_regularizer= regularizers.l1(0.006), activation= 'relu'),
    Dropout(rate= 0.45, seed= 123),
    Dense(class_count, activation= 'softmax')
])

model.compile(Adamax(learning_rate= 0.001), loss= 'categorical_crossentropy', metrics= ['accuracy'])

model.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 efficientnetb5 (Functional  (None, 2048)              28513527  
 )                                                               
                                                                 
 batch_normalization (Batch  (None, 2048)              8192      
 Normalization)                                                  
                                                                 
 dense (Dense)               (None, 256)               524544    
                                                                 
 dense_1 (Dense)             (None, 128)               32896     
                                                                 
 dropout (Dropout)           (None, 128)               0         
                                                                 
 dense_2 (Dense)             (None, 4)                 516       
                                                                 
=================================================================
Total params: 29079675 (110.93 MB)
Trainable params: 562052 (2.14 MB)
Non-trainable params: 28517623 (108.79 MB)
_________________________________________________________________

7.2 ||Define Early Stop¶

In [ ]:
early_stopping = EarlyStopping(monitor='val_accuracy', 
                               patience=5, 
                               restore_best_weights=True,
                               mode='max',
                              )

def step_decay(epoch):
    
     initial_lrate = 0.1
     drop = 0.5
     epochs_drop = 10.0
     lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
     return lrate

lr_scheduler = LearningRateScheduler(step_decay)

7.3 ||Train Model¶

In [ ]:
batch_size = 16   # set batch size for training
epochs = 100   # number of all epochs in training

history = model.fit(x=train_gen,
                    epochs= epochs,
                    verbose= 1,
                    validation_data= valid_gen, 
                    validation_steps= None,
                    shuffle= False)
Epoch 1/100
50/50 [==============================] - 125s 2s/step - loss: 4.4923 - accuracy: 0.3613 - val_loss: 4.2016 - val_accuracy: 0.5167
Epoch 2/100
50/50 [==============================] - 103s 2s/step - loss: 3.8385 - accuracy: 0.4812 - val_loss: 3.7819 - val_accuracy: 0.5167
Epoch 3/100
50/50 [==============================] - 111s 2s/step - loss: 3.5774 - accuracy: 0.5200 - val_loss: 3.3832 - val_accuracy: 0.6750
Epoch 4/100
50/50 [==============================] - 101s 2s/step - loss: 3.3722 - accuracy: 0.5512 - val_loss: 3.3235 - val_accuracy: 0.6000
Epoch 5/100
50/50 [==============================] - 103s 2s/step - loss: 3.2034 - accuracy: 0.5800 - val_loss: 3.2102 - val_accuracy: 0.6167
Epoch 6/100
50/50 [==============================] - 107s 2s/step - loss: 3.0596 - accuracy: 0.6000 - val_loss: 3.0023 - val_accuracy: 0.6750
Epoch 7/100
50/50 [==============================] - 105s 2s/step - loss: 2.9141 - accuracy: 0.6450 - val_loss: 2.9779 - val_accuracy: 0.6417
Epoch 8/100
50/50 [==============================] - 101s 2s/step - loss: 2.7671 - accuracy: 0.6750 - val_loss: 2.8031 - val_accuracy: 0.6667
Epoch 9/100
50/50 [==============================] - 102s 2s/step - loss: 2.6498 - accuracy: 0.6875 - val_loss: 2.6472 - val_accuracy: 0.6333
Epoch 10/100
50/50 [==============================] - 101s 2s/step - loss: 2.5446 - accuracy: 0.6837 - val_loss: 2.5199 - val_accuracy: 0.6833
Epoch 11/100
50/50 [==============================] - 101s 2s/step - loss: 2.4355 - accuracy: 0.6837 - val_loss: 2.5377 - val_accuracy: 0.6750
Epoch 12/100
50/50 [==============================] - 101s 2s/step - loss: 2.3036 - accuracy: 0.7325 - val_loss: 2.4660 - val_accuracy: 0.6333
Epoch 13/100
50/50 [==============================] - 104s 2s/step - loss: 2.2643 - accuracy: 0.7013 - val_loss: 2.3444 - val_accuracy: 0.7000
Epoch 14/100
50/50 [==============================] - 104s 2s/step - loss: 2.1660 - accuracy: 0.7400 - val_loss: 2.2251 - val_accuracy: 0.7000
Epoch 15/100
50/50 [==============================] - 98s 2s/step - loss: 2.0352 - accuracy: 0.7538 - val_loss: 2.1050 - val_accuracy: 0.7417
Epoch 16/100
50/50 [==============================] - 98s 2s/step - loss: 1.9898 - accuracy: 0.7588 - val_loss: 2.0515 - val_accuracy: 0.7417
Epoch 17/100
50/50 [==============================] - 106s 2s/step - loss: 1.8769 - accuracy: 0.7900 - val_loss: 2.0774 - val_accuracy: 0.7083
Epoch 18/100
50/50 [==============================] - 98s 2s/step - loss: 1.8490 - accuracy: 0.7500 - val_loss: 1.9165 - val_accuracy: 0.8000
Epoch 19/100
50/50 [==============================] - 98s 2s/step - loss: 1.7424 - accuracy: 0.7987 - val_loss: 1.9567 - val_accuracy: 0.6750
Epoch 20/100
50/50 [==============================] - 97s 2s/step - loss: 1.7230 - accuracy: 0.7800 - val_loss: 1.7917 - val_accuracy: 0.7833
Epoch 21/100
50/50 [==============================] - 97s 2s/step - loss: 1.6083 - accuracy: 0.8263 - val_loss: 1.8369 - val_accuracy: 0.7167
Epoch 22/100
50/50 [==============================] - 97s 2s/step - loss: 1.5726 - accuracy: 0.8087 - val_loss: 1.6815 - val_accuracy: 0.8000
Epoch 23/100
50/50 [==============================] - 97s 2s/step - loss: 1.5338 - accuracy: 0.8163 - val_loss: 1.6656 - val_accuracy: 0.7750
Epoch 24/100
50/50 [==============================] - 98s 2s/step - loss: 1.4634 - accuracy: 0.8250 - val_loss: 1.5934 - val_accuracy: 0.8250
Epoch 25/100
50/50 [==============================] - 93s 2s/step - loss: 1.4471 - accuracy: 0.8300 - val_loss: 1.5832 - val_accuracy: 0.7667
Epoch 26/100
50/50 [==============================] - 90s 2s/step - loss: 1.3994 - accuracy: 0.8300 - val_loss: 1.6413 - val_accuracy: 0.7500
Epoch 27/100
50/50 [==============================] - 91s 2s/step - loss: 1.3770 - accuracy: 0.8213 - val_loss: 1.5554 - val_accuracy: 0.8083
Epoch 28/100
50/50 [==============================] - 90s 2s/step - loss: 1.3195 - accuracy: 0.8325 - val_loss: 1.5105 - val_accuracy: 0.7750
Epoch 29/100
50/50 [==============================] - 90s 2s/step - loss: 1.2575 - accuracy: 0.8562 - val_loss: 1.3827 - val_accuracy: 0.7917
Epoch 30/100
50/50 [==============================] - 91s 2s/step - loss: 1.2373 - accuracy: 0.8425 - val_loss: 1.4193 - val_accuracy: 0.7583
Epoch 31/100
50/50 [==============================] - 91s 2s/step - loss: 1.2424 - accuracy: 0.8338 - val_loss: 1.3519 - val_accuracy: 0.8167
Epoch 32/100
50/50 [==============================] - 90s 2s/step - loss: 1.1605 - accuracy: 0.8700 - val_loss: 1.3859 - val_accuracy: 0.8083
Epoch 33/100
50/50 [==============================] - 89s 2s/step - loss: 1.1575 - accuracy: 0.8612 - val_loss: 1.3549 - val_accuracy: 0.8000
Epoch 34/100
50/50 [==============================] - 90s 2s/step - loss: 1.1114 - accuracy: 0.8712 - val_loss: 1.2828 - val_accuracy: 0.8000
Epoch 35/100
50/50 [==============================] - 90s 2s/step - loss: 1.0912 - accuracy: 0.8700 - val_loss: 1.3489 - val_accuracy: 0.8000
Epoch 36/100
50/50 [==============================] - 91s 2s/step - loss: 1.0170 - accuracy: 0.8963 - val_loss: 1.2887 - val_accuracy: 0.7583
Epoch 37/100
50/50 [==============================] - 91s 2s/step - loss: 1.0589 - accuracy: 0.8662 - val_loss: 1.3180 - val_accuracy: 0.8000
Epoch 38/100
50/50 [==============================] - 90s 2s/step - loss: 1.0243 - accuracy: 0.8675 - val_loss: 1.1955 - val_accuracy: 0.8417
Epoch 39/100
50/50 [==============================] - 91s 2s/step - loss: 0.9775 - accuracy: 0.8712 - val_loss: 1.1880 - val_accuracy: 0.8250
Epoch 40/100
50/50 [==============================] - 91s 2s/step - loss: 0.9706 - accuracy: 0.8775 - val_loss: 1.1212 - val_accuracy: 0.8167
Epoch 41/100
50/50 [==============================] - 100s 2s/step - loss: 0.9384 - accuracy: 0.8850 - val_loss: 1.0109 - val_accuracy: 0.8583
Epoch 42/100
50/50 [==============================] - 103s 2s/step - loss: 0.9285 - accuracy: 0.8813 - val_loss: 1.1912 - val_accuracy: 0.7750
Epoch 43/100
50/50 [==============================] - 104s 2s/step - loss: 0.9547 - accuracy: 0.8562 - val_loss: 1.1259 - val_accuracy: 0.8417
Epoch 44/100
50/50 [==============================] - 113s 2s/step - loss: 0.8869 - accuracy: 0.8950 - val_loss: 1.1055 - val_accuracy: 0.8250
Epoch 45/100
50/50 [==============================] - 100s 2s/step - loss: 0.8822 - accuracy: 0.8850 - val_loss: 1.1978 - val_accuracy: 0.7917
Epoch 46/100
50/50 [==============================] - 104s 2s/step - loss: 0.8595 - accuracy: 0.8950 - val_loss: 1.0723 - val_accuracy: 0.8667
Epoch 47/100
50/50 [==============================] - 100s 2s/step - loss: 0.8127 - accuracy: 0.9112 - val_loss: 1.0931 - val_accuracy: 0.8000
Epoch 48/100
50/50 [==============================] - 97s 2s/step - loss: 0.7997 - accuracy: 0.9025 - val_loss: 1.1650 - val_accuracy: 0.7833
Epoch 49/100
50/50 [==============================] - 98s 2s/step - loss: 0.8196 - accuracy: 0.8875 - val_loss: 1.0878 - val_accuracy: 0.7667
Epoch 50/100
50/50 [==============================] - 98s 2s/step - loss: 0.7971 - accuracy: 0.9062 - val_loss: 1.0474 - val_accuracy: 0.8333
Epoch 51/100
50/50 [==============================] - 111s 2s/step - loss: 0.7903 - accuracy: 0.8938 - val_loss: 1.0944 - val_accuracy: 0.8417
Epoch 52/100
50/50 [==============================] - 101s 2s/step - loss: 0.7965 - accuracy: 0.8925 - val_loss: 1.0166 - val_accuracy: 0.8583
Epoch 53/100
50/50 [==============================] - 100s 2s/step - loss: 0.7555 - accuracy: 0.9137 - val_loss: 0.9961 - val_accuracy: 0.8417
Epoch 54/100
50/50 [==============================] - 97s 2s/step - loss: 0.7386 - accuracy: 0.9062 - val_loss: 1.0003 - val_accuracy: 0.8417
Epoch 55/100
50/50 [==============================] - 97s 2s/step - loss: 0.7162 - accuracy: 0.9150 - val_loss: 0.8775 - val_accuracy: 0.8417
Epoch 56/100
50/50 [==============================] - 97s 2s/step - loss: 0.7016 - accuracy: 0.9162 - val_loss: 0.9116 - val_accuracy: 0.8417
Epoch 57/100
50/50 [==============================] - 98s 2s/step - loss: 0.6889 - accuracy: 0.9125 - val_loss: 0.9907 - val_accuracy: 0.8250
Epoch 58/100
50/50 [==============================] - 97s 2s/step - loss: 0.6863 - accuracy: 0.9237 - val_loss: 0.8826 - val_accuracy: 0.8750
Epoch 59/100
50/50 [==============================] - 105s 2s/step - loss: 0.6804 - accuracy: 0.9175 - val_loss: 0.8464 - val_accuracy: 0.8417
Epoch 60/100
50/50 [==============================] - 104s 2s/step - loss: 0.6556 - accuracy: 0.9150 - val_loss: 0.8335 - val_accuracy: 0.8583
Epoch 61/100
50/50 [==============================] - 102s 2s/step - loss: 0.6393 - accuracy: 0.9250 - val_loss: 0.8377 - val_accuracy: 0.8500
Epoch 62/100
50/50 [==============================] - 102s 2s/step - loss: 0.6349 - accuracy: 0.9287 - val_loss: 0.8466 - val_accuracy: 0.8667
Epoch 63/100
50/50 [==============================] - 107s 2s/step - loss: 0.6261 - accuracy: 0.9212 - val_loss: 0.9799 - val_accuracy: 0.8583
Epoch 64/100
50/50 [==============================] - 99s 2s/step - loss: 0.6056 - accuracy: 0.9262 - val_loss: 0.9123 - val_accuracy: 0.8583
Epoch 65/100
50/50 [==============================] - 99s 2s/step - loss: 0.6190 - accuracy: 0.9275 - val_loss: 0.8448 - val_accuracy: 0.8917
Epoch 66/100
50/50 [==============================] - 97s 2s/step - loss: 0.6018 - accuracy: 0.9200 - val_loss: 0.8496 - val_accuracy: 0.8583
Epoch 67/100
50/50 [==============================] - 96s 2s/step - loss: 0.5803 - accuracy: 0.9312 - val_loss: 0.8681 - val_accuracy: 0.8667
Epoch 68/100
50/50 [==============================] - 96s 2s/step - loss: 0.5772 - accuracy: 0.9337 - val_loss: 0.8036 - val_accuracy: 0.9000
Epoch 69/100
50/50 [==============================] - 96s 2s/step - loss: 0.5671 - accuracy: 0.9300 - val_loss: 0.8783 - val_accuracy: 0.8500
Epoch 70/100
50/50 [==============================] - 94s 2s/step - loss: 0.5468 - accuracy: 0.9375 - val_loss: 0.9109 - val_accuracy: 0.8500
Epoch 71/100
50/50 [==============================] - 97s 2s/step - loss: 0.6007 - accuracy: 0.9237 - val_loss: 0.7520 - val_accuracy: 0.8583
Epoch 72/100
50/50 [==============================] - 96s 2s/step - loss: 0.5567 - accuracy: 0.9300 - val_loss: 0.7219 - val_accuracy: 0.8667
Epoch 73/100
50/50 [==============================] - 98s 2s/step - loss: 0.5304 - accuracy: 0.9350 - val_loss: 0.9354 - val_accuracy: 0.8500
Epoch 74/100
50/50 [==============================] - 95s 2s/step - loss: 0.5395 - accuracy: 0.9463 - val_loss: 0.8131 - val_accuracy: 0.8417
Epoch 75/100
50/50 [==============================] - 96s 2s/step - loss: 0.5327 - accuracy: 0.9262 - val_loss: 0.7592 - val_accuracy: 0.8583
Epoch 76/100
50/50 [==============================] - 96s 2s/step - loss: 0.5358 - accuracy: 0.9200 - val_loss: 0.7203 - val_accuracy: 0.8417
Epoch 77/100
50/50 [==============================] - 97s 2s/step - loss: 0.5217 - accuracy: 0.9400 - val_loss: 0.7884 - val_accuracy: 0.8583
Epoch 78/100
50/50 [==============================] - 99s 2s/step - loss: 0.5198 - accuracy: 0.9312 - val_loss: 0.8059 - val_accuracy: 0.8667
Epoch 79/100
50/50 [==============================] - 99s 2s/step - loss: 0.5205 - accuracy: 0.9200 - val_loss: 0.6963 - val_accuracy: 0.8750
Epoch 80/100
50/50 [==============================] - 104s 2s/step - loss: 0.5234 - accuracy: 0.9275 - val_loss: 0.7624 - val_accuracy: 0.8500
Epoch 81/100
50/50 [==============================] - 96s 2s/step - loss: 0.4924 - accuracy: 0.9375 - val_loss: 0.8338 - val_accuracy: 0.8917
Epoch 82/100
50/50 [==============================] - 95s 2s/step - loss: 0.4660 - accuracy: 0.9488 - val_loss: 0.7459 - val_accuracy: 0.8750
Epoch 83/100
50/50 [==============================] - 96s 2s/step - loss: 0.4567 - accuracy: 0.9650 - val_loss: 0.7903 - val_accuracy: 0.8583
Epoch 84/100
50/50 [==============================] - 122s 2s/step - loss: 0.4507 - accuracy: 0.9475 - val_loss: 0.7840 - val_accuracy: 0.8667
Epoch 85/100
50/50 [==============================] - 117s 2s/step - loss: 0.4820 - accuracy: 0.9413 - val_loss: 0.7569 - val_accuracy: 0.8583
Epoch 86/100
50/50 [==============================] - 111s 2s/step - loss: 0.4563 - accuracy: 0.9475 - val_loss: 0.7625 - val_accuracy: 0.8750
Epoch 87/100
50/50 [==============================] - 107s 2s/step - loss: 0.4588 - accuracy: 0.9475 - val_loss: 0.6946 - val_accuracy: 0.8917
Epoch 88/100
50/50 [==============================] - 102s 2s/step - loss: 0.4334 - accuracy: 0.9625 - val_loss: 0.8138 - val_accuracy: 0.8750
Epoch 89/100
50/50 [==============================] - 100s 2s/step - loss: 0.4944 - accuracy: 0.9388 - val_loss: 0.7195 - val_accuracy: 0.8917
Epoch 90/100
50/50 [==============================] - 100s 2s/step - loss: 0.4287 - accuracy: 0.9400 - val_loss: 0.7076 - val_accuracy: 0.8833
Epoch 91/100
50/50 [==============================] - 108s 2s/step - loss: 0.4330 - accuracy: 0.9525 - val_loss: 0.6616 - val_accuracy: 0.8750
Epoch 92/100
50/50 [==============================] - 138s 3s/step - loss: 0.4466 - accuracy: 0.9463 - val_loss: 0.8065 - val_accuracy: 0.8917
Epoch 93/100
50/50 [==============================] - 106s 2s/step - loss: 0.4547 - accuracy: 0.9413 - val_loss: 0.7264 - val_accuracy: 0.8500
Epoch 94/100
50/50 [==============================] - 108s 2s/step - loss: 0.4093 - accuracy: 0.9588 - val_loss: 0.7321 - val_accuracy: 0.8500
Epoch 95/100
50/50 [==============================] - 141s 3s/step - loss: 0.3969 - accuracy: 0.9625 - val_loss: 0.6576 - val_accuracy: 0.9167
Epoch 96/100
50/50 [==============================] - 115s 2s/step - loss: 0.4111 - accuracy: 0.9538 - val_loss: 0.8033 - val_accuracy: 0.8333
Epoch 97/100
50/50 [==============================] - 101s 2s/step - loss: 0.4006 - accuracy: 0.9563 - val_loss: 0.7390 - val_accuracy: 0.8750
Epoch 98/100
50/50 [==============================] - 123s 2s/step - loss: 0.4020 - accuracy: 0.9513 - val_loss: 0.6712 - val_accuracy: 0.8833
Epoch 99/100
50/50 [==============================] - 114s 2s/step - loss: 0.4063 - accuracy: 0.9500 - val_loss: 0.7979 - val_accuracy: 0.8667
Epoch 100/100
50/50 [==============================] - 109s 2s/step - loss: 0.4085 - accuracy: 0.9450 - val_loss: 0.6699 - val_accuracy: 0.8917

8 ||Evaluate Model¶

8.1 ||Plot accuarcy and loss curve ¶

In [ ]:
# Define needed variables
tr_acc = history.history['accuracy']
tr_loss = history.history['loss']
val_acc = history.history['val_accuracy']
val_loss = history.history['val_loss']
index_loss = np.argmin(val_loss)
val_lowest = val_loss[index_loss]
index_acc = np.argmax(val_acc)
acc_highest = val_acc[index_acc]
Epochs = [i+1 for i in range(len(tr_acc))]
loss_label = f'best epoch= {str(index_loss + 1)}'
acc_label = f'best epoch= {str(index_acc + 1)}'

# Plot training history

plt.figure(figsize= (20, 8))
plt.style.use('fivethirtyeight')

plt.subplot(1, 2, 1)
plt.plot(Epochs, tr_loss, 'r', label= 'Training loss')
plt.plot(Epochs, val_loss, 'g', label= 'Validation loss')
plt.scatter(index_loss + 1, val_lowest, s= 150, c= 'blue', label= loss_label)
plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(Epochs, tr_acc, 'r', label= 'Training Accuracy')
plt.plot(Epochs, val_acc, 'g', label= 'Validation Accuracy')
plt.scatter(index_acc + 1 , acc_highest, s= 150, c= 'blue', label= acc_label)
plt.title('Training and Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.tight_layout
plt.show()

8.2 ||Model Accuarcy¶

In [ ]:
ts_length = len(test_df)
test_batch_size = max(sorted([ts_length // n for n in range(1, ts_length + 1) if ts_length%n == 0 and ts_length/n <= 80]))
test_steps = ts_length // test_batch_size

train_score = model.evaluate(train_gen, steps= test_steps, verbose= 1)
valid_score = model.evaluate(valid_gen, steps= test_steps, verbose= 1)
test_score = model.evaluate(test_gen, steps= test_steps, verbose= 1)

print("Train Loss: ", train_score[0])
print("Train Accuracy: ", train_score[1])
print('-' * 20)
print("Test Loss: ", test_score[0])
print("Test Accuracy: ", test_score[1])
1/1 [==============================] - 2s 2s/step - loss: 0.2556 - accuracy: 1.0000
1/1 [==============================] - 2s 2s/step - loss: 1.2957 - accuracy: 0.6250
1/1 [==============================] - 8s 8s/step - loss: 0.4713 - accuracy: 0.9125
Train Loss:  0.2556184232234955
Train Accuracy:  1.0
--------------------
Test Loss:  0.47132110595703125
Test Accuracy:  0.9125000238418579

8.3 ||Get Prediction¶

In [ ]:
preds = model.predict_generator(test_gen)
y_pred = np.argmax(preds, axis=1)

8.4 ||Confussion Matrix¶

In [ ]:
g_dict = test_gen.class_indices
classes = list(g_dict.keys())

# Confusion matrix
cm = confusion_matrix(test_gen.classes, y_pred)

plt.figure(figsize= (10, 10))
plt.imshow(cm, interpolation= 'nearest', cmap= plt.cm.Blues)
plt.title('Confusion Matrix')
plt.colorbar()

tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation= 45)
plt.yticks(tick_marks, classes)


thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
    plt.text(j, i, cm[i, j], horizontalalignment= 'center', color= 'white' if cm[i, j] > thresh else 'black')

plt.tight_layout()
plt.ylabel('True Label')
plt.xlabel('Predicted Label')

plt.show()

8.5 ||Classification Report¶

In [ ]:
# Classification report
print(classification_report(test_gen.classes, y_pred, target_names= classes))
              precision    recall  f1-score   support

       Angry       0.97      0.93      0.95        30
       Other       0.87      1.00      0.93        13
         Sad       1.00      0.90      0.95        20
       happy       0.94      1.00      0.97        17

    accuracy                           0.95        80
   macro avg       0.94      0.96      0.95        80
weighted avg       0.95      0.95      0.95        80

9 ||Save the Model¶

In [ ]:
model.save_weights('pet_facial_emotion_efficientNet.h5')

10 ||Load the model and Predict the Inputs¶

In [ ]:
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.efficientnet import preprocess_input

def predict_and_display(image_path, model):
    
    img = image.load_img(image_path, target_size=(224, 224))
    img_array = image.img_to_array(img)
    img_array = np.expand_dims(img_array, axis=0)
    img_array = preprocess_input(img_array)

    prediction = model.predict(img_array)
    predicted_class_index = np.argmax(prediction)
    
    class_indices = train_gen.class_indices
    class_labels = list(class_indices.keys())
    predicted_class_label = class_labels[predicted_class_index]
    
    plt.imshow(img)
    plt.axis('off')
    if predicted_class_label == 'Other':
        plt.title(f"The pet is normal")
    else:
        plt.title(f"The Pet is {predicted_class_label}")
    plt.show()

model.load_weights('pet_facial_emotion_efficientNet.h5')

class_labels = ['Angry', 'Other', 'Sad', 'Happy']

# Replace 'path_to_test_image' with the path to the image you want to test
image_path_to_test = 'pets_facial_expression_dataset/Angry/02.jpg'
predict_and_display(image_path_to_test, model)
1/1 [==============================] - 4s 4s/step
In [ ]:
image_path_to_test = 'pets_facial_expression_dataset/Sad/011.jpg'
predict_and_display(image_path_to_test, model)
1/1 [==============================] - 0s 179ms/step
In [ ]:
image_path_to_test = 'pets_facial_expression_dataset/happy/032.jpg'
predict_and_display(image_path_to_test, model)
1/1 [==============================] - 0s 226ms/step
In [ ]:
image_path_to_test = 'pets_facial_expression_dataset/Other/20.jpg'
predict_and_display(image_path_to_test, model)
1/1 [==============================] - 0s 235ms/step

11 ||Author Message¶

If you liked this Notebook, please do upvote.
If you have any questions, feel free to comment!
If you have any advice for me I will be grateful to leave it to me in the comments!
✨Best Wishes✨